
    Unmixing Binocular Signals

    Incompatible images presented to the two eyes lead to perceptual oscillations in which one image at a time is visible. Early models portrayed this binocular rivalry as involving reciprocal inhibition between monocular representations of images, occurring at an early visual stage prior to binocular mixing. However, psychophysical experiments found conditions where rivalry could also occur at a higher, more abstract level of representation. In those cases, the rivalry was between image representations dissociated from eye-of-origin information, rather than between monocular representations from the two eyes. Moreover, neurophysiological recordings found the strongest rivalry correlate in inferotemporal cortex, a high-level, predominantly binocular visual area involved in object recognition, rather than early visual structures. An unresolved issue is how the separate identities of the two images can be maintained after binocular mixing in order for rivalry to be possible at higher levels. Here we demonstrate that after the two images are mixed, they can be unmixed at any subsequent stage using a physiologically plausible non-linear signal-processing algorithm, non-negative matrix factorization, previously proposed for parsing object parts during object recognition. The possibility that unmixed left and right images can be regenerated at late stages within the visual system provides a mechanism for creating various binocular representations and interactions de novo in different cortical areas for different purposes, rather than inheriting them from early areas. This is a clear example of how non-linear algorithms can lead to highly non-intuitive behavior in neural information processing.
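As a rough illustration of the unmixing idea described above, the following sketch mixes two non-negative "monocular" images across a model binocular population and then recovers them with scikit-learn's non-negative matrix factorization. The image patterns, population size, and mixing weights are all invented for the example; this is not the authors' code.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Two hypothetical non-negative "monocular" source images (8x8, flattened).
left = np.zeros((8, 8)); left[:, 2] = 1.0    # vertical bar to the left eye
right = np.zeros((8, 8)); right[4, :] = 1.0  # horizontal bar to the right eye
sources = np.vstack([left.ravel(), right.ravel()])  # shape (2, 64)

# Binocular "mixing": each model neuron sums the two eyes with random
# non-negative weights (ocular-dominance-like variation across the population).
weights = rng.uniform(0.1, 1.0, size=(50, 2))
mixed = weights @ sources  # shape (50, 64): binocular population responses

# Unmix with NMF: mixed ~ W @ H, where the rows of H play the role of the
# regenerated left- and right-eye images.
model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(mixed)
H = model.components_

# Correlate each original source with each recovered component.
corr = np.corrcoef(np.vstack([sources, H]))[:2, 2:]
```

With only two sources and strictly non-negative mixing, each row of `H` ends up highly correlated with one of the original images, which is the sense in which the mixed signals can be "unmixed" downstream.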

    No binocular rivalry in the LGN of alert macaque monkeys

    Orthogonal drifting gratings were presented binocularly to alert macaque monkeys in an attempt to find neural correlates of binocular rivalry. Gratings were centered over lateral geniculate nucleus (LGN) receptive fields and the corresponding points for the opposite eye. The only task of the monkey was to fixate. We found no difference between the responses of LGN neurons under rivalrous and nonrivalrous conditions, as determined by examining the ratios of their respective power spectra. There was, however, a curious “temporal afterimage” effect in which cell responses continued to be modulated at the drift frequency of the grating for several seconds after the grating disappeared.
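The power-spectrum-ratio comparison mentioned above can be sketched as follows. This is an illustrative stand-in, not the authors' analysis: the sampling rate, drift frequency, and simulated firing rates are invented, and the two conditions are constructed to be equivalent so their power ratio comes out near one.

```python
import numpy as np

def power_at(signal, freq_hz, fs):
    """Power of `signal` at the frequency bin nearest `freq_hz` (fs in Hz)."""
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq_hz))]

fs, dur, drift = 1000.0, 2.0, 4.0  # sampling rate (Hz), duration (s), drift (Hz)
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(1)

# Model LGN firing rates modulated at the 4 Hz drift frequency, plus noise.
# "Rivalrous" and "nonrivalrous" conditions differ here only in the noise draw.
rate_rivalrous = 10 + 5 * np.sin(2 * np.pi * drift * t) + rng.normal(0, 1, t.size)
rate_control = 10 + 5 * np.sin(2 * np.pi * drift * t) + rng.normal(0, 1, t.size)

ratio = power_at(rate_rivalrous, drift, fs) / power_at(rate_control, drift, fs)
# A ratio near 1 corresponds to no difference in drift-frequency power
# between conditions, i.e., no rivalry signature at this stage.
```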

    Population Coding of Visual Space: Comparison of Spatial Representations in Dorsal and Ventral Pathways

    Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention from neurophysiological investigations. In this study we characterize representations of space within visual cortex, and examine how they differ in a first direct comparison between dorsal and ventral subdivisions of the visual pathways. Neural activities were recorded in anterior inferotemporal cortex (AIT) and lateral intraparietal cortex (LIP) of awake behaving monkeys, structures associated with the ventral and dorsal visual pathways respectively, as a stimulus was presented at different locations within the visual field. In spatially selective cells, we find greater modulation of cell responses in LIP with changes in stimulus position. Further, using a novel population-based statistical approach (namely, multidimensional scaling), we recover the spatial map implicit within activities of neural populations, allowing us to quantitatively compare the geometry of neural space with physical space. We show that a population of spatially selective LIP neurons, despite having large receptive fields, is able to almost perfectly reconstruct stimulus locations within a low-dimensional representation. In contrast, a population of AIT neurons, despite each cell being spatially selective, provides less accurate low-dimensional reconstructions of stimulus locations. Instead, it produces only a topologically (categorically) correct rendition of space, which nevertheless might be critical for object and scene recognition. Furthermore, we found that the spatial representation recovered from population activity shows greater translation invariance in LIP than in AIT. We suggest that LIP spatial representations may be dimensionally isomorphic with 3D physical space, while AIT spatial representations may reflect a more categorical representation of space (e.g., “next to” or “above”).

    Population Coding of Visual Space: Modeling

    We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation.
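The modeling pipeline described above can be sketched in a few lines: Gaussian RFs encode stimulus positions as a population response, and multidimensional scaling on the unlabeled responses recovers the relative layout of the stimuli. All parameter values (grid extent, population size, RF width) are assumed for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Stimulus positions on a 5x5 grid (in "visual degrees").
gx, gy = np.meshgrid(np.linspace(-10, 10, 5), np.linspace(-10, 10, 5))
stimuli = np.column_stack([gx.ravel(), gy.ravel()])  # (25, 2)

# Population of neurons with large Gaussian RFs, centers dispersed widely.
centers = rng.uniform(-15, 15, size=(200, 2))
sigma = 8.0  # large RF width: the regime where recovery is metrically accurate

d2 = ((stimuli[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
responses = np.exp(-d2 / (2 * sigma**2))  # (25 stimuli, 200 neurons)

# Intrinsic (unlabeled) coding: only inter-stimulus response distances are
# used, so the recovered map matches physical space only up to
# rotation, reflection, and overall scale.
dist = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
recovered = mds.fit_transform(dist)  # low-dimensional spatial map
```

Shrinking `sigma` in this sketch degrades the recovered map, mirroring the paper's finding that small RFs fail to support even a topologically consistent low-dimensional representation.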

    Towards building a more complex view of the lateral geniculate nucleus: Recent advances in understanding its role

    The lateral geniculate nucleus (LGN) has often been treated in the past as a linear filter that adds little to retinal processing of visual inputs. Here we review anatomical, neurophysiological, brain imaging, and modeling studies that have in recent years built up a much more complex view of the LGN. These studies cover effects related to nonlinear dendritic processing, cortical feedback, synchrony and oscillations across LGN populations, as well as the involvement of the LGN in higher-level cognitive processing. Although recent studies have provided valuable insights into early visual processing, including the role of the LGN, a unified model of LGN responses to real-world objects has not yet been developed. In the light of recent data, we suggest that the role of the LGN deserves more careful consideration in developing models of high-level visual processing.

    Characteristics of Eye-Position Gain Field Populations Determine Geometry of Visual Space

    We have previously demonstrated differences in eye-position spatial maps for anterior inferotemporal cortex (AIT) in the ventral stream and lateral intraparietal cortex (LIP) in the dorsal stream, based on population decoding of gaze angle modulations of neural visual responses (i.e., eye-position gain fields). Here we explore the basis of such spatial encoding differences through modeling of gain field characteristics. We created a population of model neurons, each having a different eye-position gain field. This population was used to reconstruct eye-position visual space using multidimensional scaling. As gain field shapes have never been well established experimentally, we examined different functions, including planar, sigmoidal, elliptical, hyperbolic, and mixtures of those functions. All functions successfully recovered positions, indicating weak constraints on allowable gain field shapes. We then used a genetic algorithm to modify the characteristics of model gain field populations until the recovered spatial maps closely matched those derived from monkey neurophysiological data in AIT and LIP. The primary difference found between model AIT and LIP gain fields was that AIT gain fields were more foveally dominated. That is, gain fields in AIT operated on smaller spatial scales and smaller dispersions than in LIP. Thus we show that the geometry of eye-position visual space depends on the population characteristics of gain fields, and that differences in gain field characteristics for different cortical areas may underlie differences in the representation of space.
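Two of the gain field shapes named above (planar and sigmoidal) can be sketched as simple functions of gaze angle; each model neuron's visual response would be multiplied by such a gain. The slopes, offsets, and gaze range below are invented for illustration and are not the paper's fitted parameters.

```python
import numpy as np

def planar_gain(eye_xy, slope, offset=1.0):
    """Planar gain field: gain varies linearly with (x, y) eye position."""
    g = offset + eye_xy @ slope
    return np.clip(g, 0.0, None)  # firing-rate gains cannot go negative

def sigmoid_gain(eye_xy, slope, center):
    """Sigmoidal gain field along the direction defined by `slope`."""
    return 1.0 / (1.0 + np.exp(-(eye_xy - center) @ slope))

rng = np.random.default_rng(0)
eye_positions = rng.uniform(-10, 10, size=(100, 2))  # gaze angles in degrees

# A small model population with random planar gain fields:
# rows index eye positions, columns index neurons.
slopes = rng.normal(0, 0.05, size=(2, 20))
planar_gains = planar_gain(eye_positions, slopes)  # (100, 20)

# One sigmoidal gain field, modulated along the horizontal gaze axis.
sig_gains = sigmoid_gain(eye_positions,
                         slope=np.array([0.2, 0.0]),
                         center=np.array([0.0, 0.0]))  # (100,)
```

In the paper's pipeline, such per-neuron gains modulate visual responses, and the resulting population activity across eye positions is fed into multidimensional scaling; a genetic algorithm then tunes the gain field parameters until the recovered map matches the neurophysiological one.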

    Spatial Modulation of Primate Inferotemporal Responses by Eye Position

    Background: A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information. Methodology/Principal Findings: We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity. Conclusions/Significance: These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with the ventral stream and spatial processing with the dorsal stream.
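The finding that eye position modulates response magnitude without changing shape selectivity amounts to a multiplicative gain that preserves the rank order of shape preferences. A toy sketch with invented firing rates (not recorded data) makes the distinction concrete:

```python
import numpy as np

# Hypothetical AIT neuron: tuning over four simple 2D shapes (spikes/s).
shape_tuning = np.array([12.0, 30.0, 8.0, 20.0])

def response(eye_gain):
    """Multiplicative eye-position modulation of the shape responses."""
    return eye_gain * shape_tuning

central = response(1.0)  # central fixation
shifted = response(0.7)  # a small gaze shift lowers the overall gain

# The gaze shift scales response magnitude, but the preference ranking
# across shapes (i.e., shape selectivity) is unchanged.
rank_central = np.argsort(central)
rank_shifted = np.argsort(shifted)
```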

    Not All Categories Work the Same Way


    Bayesian Estimation of Stimulus Responses in Poisson Spike Trains
